
    Using Graph Properties to Speed-up GPU-based Graph Traversal: A Model-driven Approach

    While it is well known and acknowledged that the performance of graph algorithms is heavily dependent on the input data, there has been surprisingly little research to quantify and predict the impact the graph structure has on performance. Parallel graph algorithms, running on many-core systems such as GPUs, are no exception: most research has focused on how to efficiently implement and tune different graph operations on a specific GPU. However, the performance impact of the input graph has only been taken into account indirectly, as a result of the graphs used to benchmark the system. In this work, we present a case study investigating how to use the properties of the input graph to improve the performance of breadth-first search (BFS) graph traversal. To do so, we first study the performance variation of 15 different BFS implementations across 248 graphs. Using this performance data, we show that significant speed-up can be achieved by combining the best implementation for each level of the traversal. To make use of this data-dependent optimization, we must correctly predict the relative performance of algorithms per graph level, and enable dynamic switching to the optimal algorithm for each level at runtime. We use the collected performance data to train a binary decision tree, to enable high-accuracy predictions and fast switching. We demonstrate empirically that our decision tree is both fast enough to allow dynamic switching between implementations, without noticeable overhead, and accurate enough in its prediction to enable significant BFS speedup. We conclude that our model-driven approach (1) enables BFS to outperform state-of-the-art GPU algorithms, and (2) can be adapted for other BFS variants, other algorithms, or more specific datasets.
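
    The level-by-level switching idea lends itself to a compact sketch. The following Python is illustrative only: the real implementations are GPU kernels, and the single frontier-density threshold below stands in for the trained binary decision tree. It shows two classic BFS level strategies and a predictor choosing between them at every level.

        def bfs_topdown_level(graph, frontier, visited):
            # Scan outgoing edges of the frontier (cheap when the frontier is small).
            nxt = set()
            for u in frontier:
                for v in graph[u]:
                    if v not in visited:
                        visited.add(v)
                        nxt.add(v)
            return nxt

        def bfs_bottomup_level(graph, frontier, visited):
            # Unvisited vertices look for any parent in the frontier
            # (cheap when the frontier covers much of the graph).
            nxt = set()
            for v in range(len(graph)):
                if v not in visited and any(u in frontier for u in graph[v]):
                    visited.add(v)
                    nxt.add(v)
            return nxt

        def choose_kernel(frontier, n_vertices):
            # Stand-in for the trained binary decision tree: a single
            # frontier-density threshold decides which implementation runs.
            return bfs_bottomup_level if len(frontier) > n_vertices // 16 else bfs_topdown_level

        def hybrid_bfs(graph, source):
            visited, frontier = {source}, {source}
            while frontier:
                frontier = choose_kernel(frontier, len(graph))(graph, frontier, visited)
            return visited

        graph = [[1, 2], [0, 3], [0, 3], [1, 2]]   # small undirected adjacency list
        print(sorted(hybrid_bfs(graph, 0)))        # -> [0, 1, 2, 3]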

    Integration of Blockchain and Auction Models: A Survey, Some Applications, and Challenges

    In recent years, blockchain has gained widespread attention as an emerging technology for decentralization, transparency, and immutability in advancing online activities over public networks. As an essential market process, auctions have been well studied and applied in many business fields due to their efficiency and contributions to fair trade. Complementary features between blockchain and auction models trigger great potential for research and innovation. On the one hand, the decentralized nature of blockchain can provide a trustworthy, secure, and cost-effective mechanism to manage the auction process; on the other hand, auction models can be utilized to design incentive and consensus protocols in blockchain architectures. These opportunities have attracted enormous research and innovation activities in both academia and industry; however, there is a lack of an in-depth review of existing solutions and achievements. In this paper, we conduct a comprehensive state-of-the-art survey of these two research topics. We review the existing solutions for integrating blockchain and auction models, with some application-oriented taxonomies generated. Additionally, we highlight some open research challenges and future directions towards integrated blockchain-auction models.
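
    As a concrete illustration of one integration direction, the sketch below is hypothetical and written in Python rather than a smart-contract language. It mimics a sealed-bid, first-price auction of the kind that could be settled on-chain: bids are committed as hashes during the bidding phase and revealed afterwards, so the ledger enforces that no bid can be changed once sealed.

        import hashlib, secrets

        def commit(bid, nonce):
            # Sealed bid: only this hash is published during the bidding phase.
            return hashlib.sha256(f"{bid}:{nonce}".encode()).hexdigest()

        class SealedBidAuction:
            def __init__(self):
                self.commitments = {}   # bidder -> commitment recorded on the ledger
                self.revealed = {}      # bidder -> bid revealed after bidding closes

            def place(self, bidder, commitment):
                self.commitments[bidder] = commitment

            def reveal(self, bidder, bid, nonce):
                # A reveal only counts if it matches the earlier commitment.
                if commit(bid, nonce) == self.commitments.get(bidder):
                    self.revealed[bidder] = bid

            def settle(self):
                # First-price rule: the highest validly revealed bid wins.
                return max(self.revealed, key=self.revealed.get) if self.revealed else None

        auction = SealedBidAuction()
        nonce = secrets.token_hex(8)
        auction.place("alice", commit(100, nonce))
        auction.reveal("alice", 100, nonce)
        print(auction.settle())   # -> alice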

    Using SAML and XACML for Complex Resource Provisioning in Grid Based Applications

    This paper presents ongoing research and current results on the development of a flexible access control infrastructure for complex resource provisioning (CRP) in Grid-based applications. The paper proposes a general CRP model and specifies the major requirements for the Authorisation (AuthZ) service infrastructure to support multidomain CRP, focusing on two main issues: policy expression for complex resource models and AuthZ session support. The paper provides suggestions about using XACML and its profiles to describe access control policies for complex resources, and briefly describes the proposed XML-based AuthZ ticket format to support extended AuthZ session context. Additionally, the paper discusses what specific functionality can be added to the gLite Java Authorisation Framework (gJAF) to handle dynamic security context, including AuthZ session support. The paper is based on experiences gained from major Grid-based and Grid-oriented projects such as EGEE.
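
    A minimal sketch of the policy-plus-session idea, assuming a hypothetical policy shape loosely modelled on an XACML rule (this is not the XACML or gJAF API): a policy decision point matches the request target, checks the subject's role, and additionally requires a valid AuthZ session ticket.

        # Hypothetical policy shaped loosely after an XACML rule: a target
        # (resource, action), a role condition, and a session requirement.
        policy = {
            "target": {"resource": "grid:compute-cluster", "action": "reserve"},
            "allowed_roles": {"researcher", "vo-admin"},
            "require_session": True,
        }

        def evaluate(request, active_sessions):
            # Minimal policy decision point: match target, check role, check session.
            if (request["resource"] != policy["target"]["resource"]
                    or request["action"] != policy["target"]["action"]):
                return "NotApplicable"
            if request["role"] not in policy["allowed_roles"]:
                return "Deny"
            if policy["require_session"] and request.get("session_id") not in active_sessions:
                return "Deny"   # no valid AuthZ ticket bound to this session
            return "Permit"

        active_sessions = {"ticket-42"}   # AuthZ tickets issued earlier in the session
        print(evaluate({"resource": "grid:compute-cluster", "action": "reserve",
                        "role": "researcher", "session_id": "ticket-42"},
                       active_sessions))   # -> Permit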

    Simulating the universe on an intercontinental grid of supercomputers

    Understanding the universe is hampered by the elusiveness of its most common constituent, cold dark matter. Almost impossible to observe, dark matter can be studied effectively by means of simulation, and there is probably no other research field where simulation has led to so much progress in the last decade. Cosmological N-body simulations are an essential tool for evolving density perturbations in the nonlinear regime. Simulating the formation of large-scale structures in the universe, however, is still a challenge due to the enormous dynamic range in spatial and temporal coordinates, and due to the enormous computer resources required. The dynamic range is generally dealt with by the hybridization of numerical techniques. We deal with the computational requirements by connecting two supercomputers via an optical network and making them operate as a single machine. This is challenging, if only for the fact that the supercomputers of our choice are separated by half the planet: one is located in Amsterdam and the other in Tokyo. The co-scheduling of the two computers and the 'gridification' of the code enable us to achieve a 90% efficiency for this distributed intercontinental supercomputer. (Accepted for publication in IEEE Computer.)
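
    A toy model, with invented numbers, of why the intercontinental setup can stay efficient: when the wide-area position exchange is overlapped with local force computation, a step costs the maximum of the two rather than their sum, so efficiency relative to a single machine of twice the size stays high.

        def step_time(compute_s, exchange_s, overlap=True):
            # Wall-clock time of one step on the two-site machine.
            if overlap:
                return max(compute_s, exchange_s)   # communication hidden behind compute
            return compute_s + exchange_s           # naive: pay the WAN exchange every step

        single_machine_step = 2.0                   # all particles on one supercomputer
        two_site_step = step_time(compute_s=1.0, exchange_s=1.11)

        efficiency = single_machine_step / (2 * two_site_step)
        print(f"parallel efficiency: {efficiency:.0%}")   # -> 90%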

    Security Services Lifecycle Management in On-Demand Infrastructure Services Provisioning

    Modern e-Science applications require high-performance and complicated network and computer infrastructure to support distributed collaborating groups of researchers and applications that should be provisioned on-demand. The effective use and management of the dynamically provisioned services can be achieved by using the Service Delivery Framework (SDF) proposed by the TeleManagement Forum, which provides a good basis for defining the whole services lifecycle management and supporting infrastructure services. The paper discusses conceptual issues, basic requirements and practical suggestions for provisioning consistent security services as a part of general e-Science infrastructure provisioning, in particular Grid- and Cloud-based. The proposed Security Services Lifecycle Management (SSLM) model extends the existing frameworks with additional stages such as “Reservation Session Binding” and “Registration and Synchronisation” that specifically target such security issues as provisioned resources restoration, upgrade or migration, and provide a mechanism for remote execution environment and data protection by binding them to the session context. The paper provides a short overview of the existing standards and technologies and refers to the ongoing projects and experience in developing dynamic distributed security services.
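
    The lifecycle can be pictured as a simple state machine. The sketch below is a hypothetical encoding (the two added stage names follow the paper; the exact stage set and linear ordering here are illustrative) in which the security context is bound to a session identifier so it can survive restoration or migration of the provisioned resources.

        # Stage names follow the paper; the linear ordering here is illustrative.
        STAGES = [
            "Request",
            "Reservation Session Binding",       # bind security context to the session
            "Deployment",
            "Registration and Synchronisation",  # re-sync after restore/upgrade/migration
            "Operation",
            "Decommissioning",
        ]

        class SecurityServiceLifecycle:
            def __init__(self, session_id):
                # The session context travels with the service through every stage,
                # so the executing environment and data stay bound to it.
                self.context = {"session": session_id}
                self.stage = 0

            def advance(self):
                if self.stage + 1 < len(STAGES):
                    self.stage += 1
                return STAGES[self.stage]

        svc = SecurityServiceLifecycle("sess-7")
        while svc.stage < len(STAGES) - 1:
            print(svc.advance())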

    Toward Executable Scientific Publications

    Reproducibility of experiments is considered one of the main principles of the scientific method. Recent developments in data- and computation-intensive science, i.e. e-Science, and the state of the art in Cloud computing provide the necessary components to preserve data sets and re-run the code and software that create research data. The Executable Paper (EP) concept uses state-of-the-art technology to include data sets, code, and software in the electronic publication such that readers can validate the presented results. In this paper we present how to advance the current state of the art to preserve data sets, code, and software that create research data, and the basic components of an execution platform to preserve long-term compatibility of EPs, and we identify a number of issues and challenges in the realization of EPs.
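
    One way to make the EP idea concrete is a manifest that pins every preserved artefact by content hash, so a reader can re-run the experiment and verify the inputs bit-for-bit. The sketch below is a hypothetical format, not an actual Executable Paper specification.

        import hashlib, json

        def sha256(data: bytes) -> str:
            return hashlib.sha256(data).hexdigest()

        dataset = b"x,y\n1,2\n3,4\n"            # stand-in for a preserved data set
        manifest = {
            "paper": "doi:10.xxxx/example",     # placeholder identifier
            "dataset_sha256": sha256(dataset),
            "code_entrypoint": "analysis.py",   # the preserved code that creates the data
            "environment": {"image": "python:3.11", "packages": ["numpy==1.26"]},
        }

        def validate(artefact: bytes, recorded: str) -> bool:
            # A reader's check: the artefact is bit-identical to the published one.
            return sha256(artefact) == recorded

        assert validate(dataset, manifest["dataset_sha256"])
        print(json.dumps(manifest, indent=2))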

    Developing and operating time critical applications in clouds: the state of the art and the SWITCH approach

    Cloud environments can provide virtualized, elastic, controllable and high-quality on-demand services for supporting complex distributed applications. However, the engineering methods and software tools used for developing, deploying and executing classical time critical applications do not, as yet, account for the programmability and controllability provided by clouds, and so time critical applications cannot yet benefit from the full potential of cloud technology. This paper reviews the state of the art of technologies involved in developing time critical cloud applications, and presents the approach of a recently funded EU H2020 project: the Software Workbench for Interactive, Time Critical and Highly self-adaptive cloud applications (SWITCH). SWITCH aims to improve the existing development and execution model of time critical applications by introducing a novel conceptual model, the application-infrastructure co-programming and control model, in which application QoS and QoE, together with the programmability and controllability of cloud environments, are included in the complete application lifecycle.
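
    The co-programming and control idea implies a runtime loop in which a declared QoS target drives infrastructure control. The sketch below is purely illustrative: the monitoring probe, the provisioning call, and all numbers are invented.

        import random

        QOS_TARGET_MS = 200                     # the application's declared QoS target

        def measure_latency_ms():
            return random.uniform(100, 400)     # stand-in for a monitoring probe

        def scale_out(instances):
            return instances + 1                # stand-in for a cloud provisioning call

        instances = 1
        for step in range(5):
            latency = measure_latency_ms() / instances   # more instances, lower latency
            if latency > QOS_TARGET_MS:
                instances = scale_out(instances)
                print(f"step {step}: {latency:.0f} ms over target, scaled to {instances}")
            else:
                print(f"step {step}: {latency:.0f} ms within target")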

    Intercloud Architecture Framework for Heterogeneous Cloud Based Infrastructure Services Provisioning On-Demand

    This paper presents ongoing research to develop the Intercloud Architecture Framework (ICAF) that addresses problems in multi-provider multi-domain heterogeneous cloud-based infrastructure services and applications integration and interoperability, to allow their on-demand provisioning. The paper refers to existing standards and ongoing standardisation activity in Cloud Computing, in particular the recently published NIST Cloud Computing Reference Architecture (CCRA) and the ITU-T JCA-Cloud activity. The proposed ICAF defines four complementary components addressing Intercloud integration and interoperability: a multi-layer Cloud Services Model that combines commonly adopted cloud service models, such as IaaS, PaaS and SaaS, in one multilayer model with corresponding inter-layer interfaces; an Intercloud Control and Management Plane that supports cloud-based applications interaction
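
    The multi-layer Cloud Services Model can be illustrated schematically: each layer exposes an interface consumed by the layer above, so provisioning a top-level service cascades down through the inter-layer interfaces. The encoding below is hypothetical, not part of ICAF.

        class Layer:
            # Each layer delegates to the layer below through an inter-layer interface.
            def __init__(self, name, below=None):
                self.name, self.below = name, below

            def provision(self, request):
                lower = self.below.provision(request) if self.below else []
                return lower + [f"{self.name}: provisioned for {request}"]

        iaas = Layer("IaaS")
        paas = Layer("PaaS", below=iaas)
        saas = Layer("SaaS", below=paas)
        for line in saas.provision("tenant-app"):
            print(line)   # IaaS first, then PaaS, then SaaS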

    Time critical requirements and technical considerations for advanced support environments for data-intensive research

    Data-centric approaches play an increasing role in many scientific domains, but in turn rely increasingly heavily on advanced research support environments for coordinating research activities, providing access to research data, and choreographing complex experiments. Critical time constraints can be seen in several application scenarios, e.g. event detection for disaster early warning, runtime execution steering, and failure recovery. However, providing support for executing such time critical research applications is still a challenging issue in many current research support environments. In this paper, we analyse time critical requirements in three key kinds of research support environment (Virtual Research Environments, Research Infrastructures, and e-Infrastructures) and review the current state of the art. An approach for dynamic infrastructure planning is discussed that may help to address some of these requirements. The work is based on requirements collection recently performed in three EU H2020 projects: SWITCH, ENVRIPLUS and VRE4EIC.
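
    Dynamic infrastructure planning of the kind discussed can be sketched as a deadline-constrained selection problem: at request time, pick the cheapest offer that still meets the application's deadline, and renegotiate if none does. Offers and numbers below are invented for illustration.

        offers = [
            {"site": "cloud-a", "runtime_s": 120, "cost": 1.0},
            {"site": "cloud-b", "runtime_s": 45,  "cost": 3.5},
            {"site": "hpc-c",   "runtime_s": 20,  "cost": 9.0},
        ]

        def plan(deadline_s):
            # Keep only offers fast enough for the deadline, then minimise cost.
            feasible = [o for o in offers if o["runtime_s"] <= deadline_s]
            if not feasible:
                return None   # caller must renegotiate the deadline or degrade QoS
            return min(feasible, key=lambda o: o["cost"])

        print(plan(60))   # -> cloud-b: the cheapest offer meeting a 60 s deadline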